The security of artificial intelligence (AI) is an important research area toward building safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by Zhongguancun Laboratory, the China Industrial Control Systems Cyber Emergency Response Team, the Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the rules of the three tracks and the solutions of the top-ranking teams in each track.
Certified defenses such as randomized smoothing have shown promise for building reliable machine learning systems against $\ell_p$-norm bounded attacks. However, existing methods are insufficient or unable to provably defend against semantic transformations, especially those without closed-form expressions (such as defocus blur and pixelation), which are more common in practice and often unrestricted. To fill this gap, we propose generalized randomized smoothing (GSmooth), a unified theoretical framework for certifying robustness against general semantic transformations via a novel dimension augmentation strategy. Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation. The surrogate model provides a powerful tool for studying the properties of semantic transformations and certifying robustness. Experimental results on several datasets demonstrate the effectiveness of our robustness certification approach against multiple semantic transformations and corruptions, which is not achievable by the alternative baselines.
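To make the smoothing idea concrete, below is a minimal PyTorch-style sketch of predicting with a classifier smoothed over the parameter of a semantic transformation that is approximated by a surrogate image-to-image network. The `surrogate(x, theta)` interface, the Gaussian parameter sampling, and the Monte Carlo majority vote are illustrative assumptions; this is not the GSmooth dimension-augmentation certificate itself.

```python
import torch

@torch.no_grad()
def smoothed_predict(classifier, surrogate, x, num_classes,
                     n_samples=100, sigma=0.5):
    """Monte Carlo prediction of a classifier smoothed over the parameter of a
    semantic transformation approximated by a surrogate image-to-image network.
    `x` is a single image of shape (1, C, H, W); `surrogate(x, theta)` is assumed
    to return the transformed image."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n_samples):
        theta = sigma * torch.randn(1, device=x.device)   # random transformation parameter
        x_t = surrogate(x, theta)                          # approximate the transformation
        pred = classifier(x_t).argmax(dim=1)
        counts[pred.item()] += 1
    return counts.argmax().item(), counts                  # majority vote and class counts
```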
We present Mobile-Former, a parallel design of MobileNet and a Transformer with a two-way bridge in between. This structure leverages the advantages of MobileNet at local processing and of the Transformer at global interaction, and the bridge enables bidirectional fusion of local and global features. Unlike recent works on vision Transformers, the Transformer in Mobile-Former contains very few tokens (e.g., 6 or fewer) that are randomly initialized to learn global priors, resulting in low computational cost. Combined with the proposed lightweight cross-attention that models the bridge, Mobile-Former is not only computationally efficient but also has more representation power. It outperforms MobileNetV3 in the low-FLOP regime from 25M to 500M FLOPs on ImageNet classification. For example, Mobile-Former achieves 77.9% top-1 accuracy at 294M FLOPs, gaining 1.3% over MobileNetV3 while saving 17% of the computation. When transferred to object detection, Mobile-Former outperforms MobileNetV3 by 8.6 AP in the RetinaNet framework. Furthermore, we build an efficient end-to-end detector by replacing the backbone, encoder, and decoder in DETR with Mobile-Former, which outperforms DETR by 1.1 AP while saving 52% of the computational cost and 36% of the parameters.
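A rough PyTorch sketch of one direction of such a bridge is given below: a handful of randomly initialized global tokens attend to the flattened local feature map via lightweight cross-attention. The single-direction simplification, dimensions, and projection layout are assumptions for illustration, not the Mobile-Former implementation.

```python
import torch
import torch.nn as nn

class MobileToFormerBridge(nn.Module):
    """One direction of the two-way bridge: a few global tokens gather local
    features from the feature map via lightweight cross-attention (sketch)."""
    def __init__(self, channels=64, token_dim=192, num_tokens=6):
        super().__init__()
        # Randomly initialized global tokens that learn global priors.
        self.tokens = nn.Parameter(torch.randn(1, num_tokens, token_dim))
        self.query = nn.Linear(token_dim, channels)   # project tokens into feature space
        self.proj = nn.Linear(channels, token_dim)    # project fused features back

    def forward(self, feature_map):
        b, c, h, w = feature_map.shape
        kv = feature_map.flatten(2).transpose(1, 2)       # (B, H*W, C) local keys/values
        q = self.query(self.tokens.expand(b, -1, -1))     # (B, num_tokens, C) queries
        attn = torch.softmax(q @ kv.transpose(1, 2) / c ** 0.5, dim=-1)
        fused = attn @ kv                                  # (B, num_tokens, C)
        return self.tokens.expand(b, -1, -1) + self.proj(fused)   # updated global tokens
```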
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting. Several methods have demonstrated impressive untargeted transferability; however, efficiently generating targeted transferability remains challenging. To this end, we develop a simple yet effective framework that applies a hierarchical generative network to craft targeted transfer-based adversarial examples. In particular, we contribute an amortized design that adapts well to multi-class targeted attacks. Extensive experiments on ImageNet show that our method improves the success rates of targeted black-box attacks by a large margin over existing methods: it reaches an average success rate of 29.1% against six diverse models based only on one substitute white-box model, substantially outperforming state-of-the-art gradient-based attack methods. Moreover, the proposed method is more than an order of magnitude more efficient than gradient-based methods.
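The following is a hedged sketch of the general recipe behind generator-based targeted attacks: a perturbation generator is trained against a frozen white-box surrogate so that the bounded perturbed images are classified as a chosen target class. The single-target setup, tanh bounding, and hyperparameters are illustrative assumptions rather than the paper's hierarchical multi-target design.

```python
import torch
import torch.nn.functional as F

def train_targeted_generator(generator, surrogate, loader, target_class,
                             eps=16/255, epochs=1, lr=2e-4, device="cuda"):
    """Train a generator to produce bounded perturbations that push a frozen
    white-box surrogate model toward `target_class` (single-target sketch)."""
    surrogate.eval()
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            delta = eps * torch.tanh(generator(x))            # perturbation in [-eps, eps]
            x_adv = (x + delta).clamp(0, 1)
            target = torch.full((x.size(0),), target_class, device=device)
            loss = F.cross_entropy(surrogate(x_adv), target)  # targeted objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return generator
```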
Correctly classifying adversarial examples is an essential but challenging requirement for safely deploying machine learning models. Even state-of-the-art adversarially trained models struggle to exceed 67% robust test accuracy on CIFAR-10, which is far from practical. A complementary approach is to introduce a rejection option, allowing the model to not return predictions on uncertain inputs, with confidence as a commonly used certainty proxy. Along this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which can provably distinguish wrongly classified inputs from correctly classified ones. This intriguing property sheds light on using coupling strategies to better detect and reject adversarial examples. We evaluate our rectified rejection (RR) module on CIFAR-10, CIFAR-10-C, and CIFAR-100 under several attacks, including adaptive ones, and demonstrate that the RR module is compatible with different adversarial training frameworks for improving robustness, with little extra computation. Code is available at https://github.com/P2333/Rectified-Rejection.
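As a rough illustration of a coupled rejection rule in this spirit, the sketch below pairs the usual max-softmax confidence with an auxiliary rectifying head and abstains when the rectified confidence falls below a threshold. The two-output model interface, the auxiliary head, and the threshold are assumptions, not the paper's exact R-Con construction.

```python
import torch

@torch.no_grad()
def predict_with_rejection(model, aux_head, x, threshold=0.5):
    """Return predictions plus a rejection mask based on a rectified confidence.
    `model` is assumed to return (logits, features); `aux_head` maps features to a
    scalar that rectifies the raw confidence, giving a coupled rejection metric."""
    logits, feats = model(x)
    probs = logits.softmax(dim=1)
    confidence, preds = probs.max(dim=1)             # standard confidence proxy
    rectifier = torch.sigmoid(aux_head(feats)).squeeze(1)
    r_con = confidence * rectifier                   # rectified confidence
    reject = r_con < threshold                       # abstain on uncertain inputs
    return preds, reject
```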
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models. By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability. To improve the efficiency of attacks, we further show that our method can be implemented by convolving the gradient at the untranslated image with a pre-defined kernel. Our method is generally applicable to any gradient-based attack method. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques.
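A minimal sketch of this trick: compute the gradient at the untranslated image once, smooth it with a pre-defined Gaussian kernel via depthwise convolution, and take a sign step. The one-step (FGSM-style) setting, kernel size, and sigma are illustrative assumptions.

```python
import numpy as np
import scipy.stats as st
import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0, channels=3):
    """Pre-defined 2D Gaussian kernel, one copy per input channel."""
    x = np.linspace(-sigma, sigma, size)
    k1d = st.norm.pdf(x)
    k2d = np.outer(k1d, k1d)
    k2d /= k2d.sum()
    kernel = torch.tensor(k2d, dtype=torch.float32)
    return kernel.expand(channels, 1, size, size).clone()

def ti_fgsm(model, x, y, eps=16/255, kernel=None):
    """One-step translation-invariant attack: convolve the gradient with the kernel."""
    if kernel is None:
        kernel = gaussian_kernel().to(x.device)
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # Depthwise convolution of the gradient approximates averaging over translated copies.
    grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=grad.shape[1])
    x_adv = (x + eps * grad.sign()).clamp(0, 1)
    return x_adv.detach()
```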
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and by the denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
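A hedged sketch of such a high-level guidance loss is shown below: the denoiser is trained to minimize the discrepancy between the frozen target model's high-level activations on the clean image and on the denoised adversarial image. The `denoiser`/`feature_extractor` interfaces and the L1 distance are assumptions for illustration.

```python
import torch

def hgd_loss(denoiser, feature_extractor, x_clean, x_adv):
    """High-level representation guided denoiser loss: distance between the target
    model's high-level features on the clean image and on the denoised adversarial image."""
    x_denoised = denoiser(x_adv)
    with torch.no_grad():                       # the guiding model stays frozen
        feat_clean = feature_extractor(x_clean)
    feat_denoised = feature_extractor(x_denoised)
    return (feat_denoised - feat_clean).abs().mean()
```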
Deep neural networks are vulnerable to adversarial examples, which poses security concerns for these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate for evaluating the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process of attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates of black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won first place in both the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
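For concreteness, here is a minimal PyTorch sketch of a momentum iterative attack under an L-infinity budget in the spirit described above; the step-size schedule, decay factor, and L1-style gradient normalization follow the common MI-FGSM recipe, but the exact settings are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Momentum iterative attack under an L_inf budget (MI-FGSM-style sketch).
    `x` is a batch of images in [0, 1] with shape (N, C, H, W)."""
    x = x.clone().detach()
    alpha = eps / steps                   # per-step size
    g = torch.zeros_like(x)               # accumulated momentum
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the gradient (L1-style) and accumulate it into the momentum term.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Sign step, then project back into the eps-ball and the valid pixel range.
        x_adv = (x_adv.detach() + alpha * g.sign() - x).clamp(-eps, eps) + x
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```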
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. It is therefore difficult to deploy them on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution for compressing GNNs, in which a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
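As background for the distillation setup this builds on (not the RELIANT fairness component itself), a minimal sketch of the standard soft-label KD objective a GNN student might be trained with is given below; the temperature and mixing weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gnn_kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Plain soft-label knowledge distillation loss for a GNN student: a weighted
    sum of the supervised loss and a KL term that makes the student mimic the
    (frozen) teacher GNN's softened predictions."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2                   # rescale gradients of the soft term
    return alpha * hard + (1 - alpha) * soft
```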
Despite significant progress in object categorization in recent years, a number of important challenges remain; chiefly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-size class vocabularies and typically requires a separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and to address supervised, zero-shot, generalized zero-shot, and open-set recognition within a unified framework. Specifically, we propose a weighted maximum-margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. The distance constraints ensure that labeled samples are projected closer to their correct prototypes in the embedding space than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open-set recognition, with up to a 310K-class vocabulary on the Animals with Attributes and ImageNet datasets.
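A minimal sketch of such a distance constraint as a hinge loss over vocabulary prototypes is given below; the Euclidean distance, uniform weighting, and margin value are illustrative assumptions rather than the paper's exact weighted formulation.

```python
import torch
import torch.nn.functional as F

def vocabulary_margin_loss(embeddings, labels, prototypes, margin=0.1):
    """Max-margin distance constraint: each labeled sample's embedding should lie
    closer to its correct class prototype (vocabulary atom) than to any other
    prototype, by at least `margin`. Shapes: embeddings (B, D), prototypes (V, D)."""
    dists = torch.cdist(embeddings, prototypes)          # (B, V) pairwise distances
    correct = dists.gather(1, labels.unsqueeze(1))       # (B, 1) distance to the true prototype
    # Hinge penalty whenever a wrong prototype is not at least `margin` farther away.
    hinge = F.relu(margin + correct - dists)
    mask = torch.ones_like(hinge)
    mask.scatter_(1, labels.unsqueeze(1), 0.0)           # ignore the true class itself
    return (hinge * mask).mean()
```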